
    Artificial Intelligence for Global Health: Learning From a Decade of Digital Transformation in Health Care

    The health needs of those living in resource-limited settings are a vastly overlooked and understudied area at the intersection of machine learning (ML) and health care. While the use of ML in health care has been popularized only in recent years by advances in deep learning, low- and middle-income countries (LMICs) have been undergoing a digital transformation of their own in health care over the last decade, leapfrogging milestones through the adoption of mobile health (mHealth). With the introduction of new technologies, it is common to start afresh with a top-down approach and implement these technologies in isolation, leading to a lack of use and a waste of resources. In this paper, we outline the necessary considerations both from the perspective of current gaps in research and from the lived experiences of health care professionals in resource-limited settings. We also briefly outline several key components of successful implementation and deployment of technologies within health systems in LMICs, including technical and cultural considerations in the development process relevant to building machine learning solutions. We then draw on these experiences to identify where key opportunities for impact exist in resource-limited settings and where AI/ML can provide the most benefit. (Accepted paper at the ICLR 2020 Workshop on Practical ML for Developing Countries.)

    Comparative Performance Analysis of Different Fingerprint Biometric Scanners for Patient Matching

    Unique patient identification within health services is an operational challenge in healthcare settings. Key identifiers such as patient names, hospital identification numbers, national IDs, and birth dates are often inadequate for ensuring unique patient identification. In addition, approximate string comparator algorithms, such as distance-based algorithms, have proven suboptimal for improving patient matching, especially in low-resource settings. Biometric approaches may improve unique patient identification. However, before implementing the technology in a given setting, such as health care, candidate scanners should be rigorously tested to identify an optimal package for the implementation. This study aimed to investigate the effects of factors such as resolution, template size, and scan capture area on the matching performance of different fingerprint scanners for use within health care settings. The performance of eight different scanners was tested using the demo application distributed as part of the Neurotech Verifinger SDK 6.0.
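    As an illustrative sketch (not drawn from the study itself), matching performance of the kind compared here is commonly summarized by the false non-match rate (FNMR) and false match rate (FMR) at a decision threshold; the function name and similarity scores below are hypothetical:

```python
def match_rates(genuine_scores, impostor_scores, threshold):
    """Estimate FNMR and FMR for a fingerprint matcher at a threshold.

    genuine_scores: similarity scores from same-finger comparisons
    impostor_scores: similarity scores from different-finger comparisons
    """
    # a genuine pair scoring below the threshold is a false non-match
    fnmr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    # an impostor pair scoring at/above the threshold is a false match
    fmr = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    return fnmr, fmr

genuine = [80, 92, 75, 88, 40]   # one genuine pair falls below threshold
impostor = [10, 22, 55, 18]      # one impostor pair matches by chance

fnmr, fmr = match_rates(genuine, impostor, threshold=50)
print(fnmr, fmr)  # 0.2 0.25
```

    Sweeping the threshold over such scores is how per-scanner trade-off curves are typically produced when comparing devices.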

    Few-Shot Transfer Learning to improve Chest X-Ray pathology detection using limited triplets

    Deep learning approaches applied to medical imaging have reached near-human or better-than-human performance on many diagnostic tasks. For instance, the CheXpert competition on detecting pathologies in chest x-rays has shown excellent multi-class classification performance. However, training and validating deep learning models require extensive collections of images, and the models still produce false inferences, as identified by a human-in-the-loop. In this paper, we introduce a practical approach to improve the predictions of a pre-trained model through Few-Shot Learning (FSL). After training and validating a model, a small number of false inference images are collected to retrain the model using Image Triplets: a false positive or false negative, a true positive, and a true negative. The retrained FSL model produces considerable gains in performance with only a few epochs and few images. In addition, FSL opens rapid retraining opportunities for human-in-the-loop systems, where a radiologist can relabel false inferences and the model can be quickly retrained. We compare our retrained model's performance with existing FSL approaches in medical imaging that train and evaluate models at once.
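    The Image Triplet construction can be sketched as follows; this is a minimal illustration assuming binary labels and hypothetical record tuples, not the authors' code:

```python
def build_triplets(records):
    """Group inference records into Image Triplets for few-shot retraining:
    each triplet pairs one false inference (FP or FN) with one confirmed
    true positive and one confirmed true negative.

    records: list of (image_id, label, prediction) with binary labels.
    """
    false_inf = [r for r in records if r[1] != r[2]]
    true_pos = [r for r in records if r[1] == r[2] == 1]
    true_neg = [r for r in records if r[1] == r[2] == 0]
    triplets = []
    for i, bad in enumerate(false_inf):
        # cycle through confirmed examples so every false inference is used
        triplets.append((bad,
                         true_pos[i % len(true_pos)],
                         true_neg[i % len(true_neg)]))
    return triplets

records = [("a", 1, 1), ("b", 0, 0), ("c", 1, 0), ("d", 0, 1)]
print(len(build_triplets(records)))  # 2 (one triplet per false inference)
```

    In a human-in-the-loop workflow, a radiologist's relabeled false inferences would feed directly into such a collection step before the short retraining pass.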

    CVAD: A generic medical anomaly detector based on Cascade VAE

    Detecting out-of-distribution (OOD) samples in medical imaging plays an important role in downstream medical diagnosis. However, existing OOD detectors are demonstrated on natural images composed of distinct inter-class samples and have difficulty generalizing to medical images. The key issue is the granularity of OOD data in the medical domain, where intra-class OOD samples are predominant. We focus on the generalizability of OOD detection for medical images and propose a self-supervised Cascade Variational autoencoder-based Anomaly Detector (CVAD). We use a cascade architecture of variational autoencoders that combines latent representations at multiple scales before feeding them to a discriminator, which distinguishes the OOD data from the in-distribution (ID) data. Finally, both the reconstruction error and the OOD probability predicted by the binary discriminator are used to determine the anomalies. We compare performance with state-of-the-art deep learning models to demonstrate our model's efficacy on various open-access medical imaging datasets for both intra- and inter-class OOD. Further extensive results on common natural image datasets show our model's effectiveness and generalizability.
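    A minimal sketch of the final scoring step described above, combining reconstruction error with the discriminator's OOD probability; the equal-weight linear sum and the `weight` parameter are assumptions for illustration, not necessarily CVAD's exact formulation:

```python
def cvad_anomaly_score(recon_error, ood_prob, weight=0.5):
    """Combine a VAE reconstruction error (normalized to [0, 1]) with the
    binary discriminator's predicted OOD probability into one anomaly
    score; higher means more anomalous."""
    return weight * recon_error + (1.0 - weight) * ood_prob

# an in-distribution image: low reconstruction error, low OOD probability
in_dist = cvad_anomaly_score(0.10, 0.05)
# an OOD sample: poorly reconstructed and flagged by the discriminator
ood = cvad_anomaly_score(0.80, 0.95)
print(in_dist < ood)  # True
```

    Thresholding this combined score then separates anomalies from ID data, letting the two signals compensate for each other's failure modes.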

    CVAD - An unsupervised image anomaly detector

    Detecting out-of-distribution (OOD) samples in image applications plays an important role in safeguarding the reliability of deployed machine learning models. In this article, we present a software tool supporting our OOD detector CVAD, a self-supervised Cascade Variational autoencoder-based Anomaly Detector, which can be easily applied to various image applications without any assumptions. The corresponding open-source software is published to support public research and tool usage.

    Human Activity Recognition using Deep Learning Models on Smartphones and Smartwatches Sensor Data

    In recent years, human activity recognition has garnered considerable attention in both industrial and academic research because of the wide deployment of sensors, such as accelerometers and gyroscopes, in products such as smartphones and smartwatches. Activity recognition is currently applied in various fields where valuable information about an individual’s functional ability and lifestyle is needed. In this study, we used the popular WISDM dataset for activity recognition. Using multivariate analysis of covariance (MANCOVA), we established a statistically significant difference (p < 0.05) between the data generated from the sensors embedded in smartphones and smartwatches. By doing this, we show that smartphones and smartwatches do not capture data in the same way because of where they are worn. We deployed several neural network architectures to classify 15 different hand-oriented and non-hand-oriented activities. These models include Long short-term memory (LSTM), Bi-directional Long short-term memory (BiLSTM), Convolutional Neural Network (CNN), and Convolutional LSTM (ConvLSTM). The developed models performed best with watch accelerometer data. We also found that the classification precision obtained with the convolutional input classifiers (CNN and ConvLSTM) was higher than that of the end-to-end LSTM classifier in 12 of the 15 activities. Additionally, the CNN model for the watch accelerometer was better able to classify non-hand-oriented activities than hand-oriented activities.
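    Sensor streams like WISDM's are typically segmented into fixed-length, overlapping windows before being fed to LSTM or CNN classifiers; the generic sketch below uses illustrative window and step sizes, not the study's settings:

```python
def sliding_windows(samples, window, step):
    """Segment a raw sensor stream into fixed-length, overlapping windows,
    a standard preprocessing step before sequence classification."""
    return [samples[i:i + window]
            for i in range(0, len(samples) - window + 1, step)]

# 10 seconds of toy 20 Hz accelerometer magnitudes
stream = list(range(200))
windows = sliding_windows(stream, window=40, step=20)  # 2 s windows, 50% overlap
print(len(windows), len(windows[0]))  # 9 40
```

    Each window (one per sensor axis) becomes one training example labeled with the activity performed during that interval.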

    Margin-Aware Intra-Class Novelty Identification for Medical Images

    Purpose: Existing anomaly detection methods focus on detecting interclass variations, while medical image novelty identification is more challenging in the presence of intraclass variations. For example, a model trained with normal chest x-rays and common lung abnormalities is expected to discover and flag idiopathic pulmonary fibrosis, a rare lung disease unseen during training. The nuances of intraclass variations and the lack of relevant training data in medical image analysis pose great challenges for existing anomaly detection methods. Approach: We address the above challenges by proposing a hybrid model—transformation-based embedding learning for novelty detection (TEND)—which combines the merits of the classifier-based and AutoEncoder (AE)-based approaches. Training TEND consists of two stages. In the first stage, we learn in-distribution embeddings with an AE via unsupervised reconstruction. In the second stage, we learn a discriminative classifier to distinguish in-distribution data from its transformed counterparts. Additionally, we propose a margin-aware objective to pull in-distribution data into a hypersphere while pushing away the transformed data. Eventually, the weighted sum of the class probability and the distance to the margin constitutes the anomaly score. Results: Extensive experiments are performed on three public medical image datasets with the one-vs-rest setup (namely one class as in-distribution data and the rest as intraclass out-of-distribution data) and the rest-vs-one setup. Additional experiments on generated intraclass out-of-distribution data with unused transformations are implemented on the datasets. The quantitative results show competitive performance compared to state-of-the-art approaches. Qualitative examples further demonstrate the effectiveness of TEND.
Conclusion: Our anomaly detection model TEND can effectively identify the challenging intraclass out-of-distribution medical images in an unsupervised fashion. It can be applied to discover unseen medical image classes and serve as the abnormal data screening for downstream medical tasks. The corresponding code is available at https://github.com/XiaoyuanGuo/TEND_MedicalNoveltyDetection
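    The anomaly score described in the Approach can be sketched as a weighted sum; the sign convention and equal weighting below are assumptions for illustration, not TEND's exact parameters:

```python
def tend_anomaly_score(class_prob_id, dist_to_margin, weight=0.5):
    """Weighted sum of (lack of) in-distribution class probability and
    distance to the hypersphere margin; higher means more anomalous
    under this sign convention."""
    return weight * (1.0 - class_prob_id) + (1.0 - weight) * dist_to_margin

# in-distribution sample: confidently classified as ID, near the center
normal = tend_anomaly_score(class_prob_id=0.95, dist_to_margin=0.1)
# intraclass OOD sample: low ID probability, far outside the margin
novel = tend_anomaly_score(class_prob_id=0.20, dist_to_margin=0.9)
print(normal < novel)  # True
```

    Thresholding this score is what lets TEND flag unseen classes such as the rare-disease example above without ever training on them.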

    Blood Glucose Level Prediction as Time-Series Modeling using Sequence-to-Sequence Neural Networks

    The management of blood glucose levels is critical in the care of Type 1 diabetes subjects. At the extremes, high or low blood glucose levels can be fatal. To avoid such adverse events, wearable technologies that continuously monitor blood glucose and administer insulin have been developed and adopted. This technology allows subjects to easily track their blood glucose levels and intervene early without the need for hospital visits. The data collected from these sensors is an excellent candidate for the application of machine learning algorithms to learn patterns and predict future blood glucose values. In this study, we developed artificial neural network algorithms based on the OhioT1DM training dataset, which contains data on 12 subjects. The dataset contains features such as subject identifiers, continuous glucose monitoring data obtained at 5-minute intervals, insulin infusion rate, etc. We developed individual models, including LSTM, BiLSTM, Convolutional LSTM, TCN, and sequence-to-sequence models. We also developed transfer learning models based on the most important features of the data, as identified by a gradient boosting algorithm. These models were evaluated on the OhioT1DM test dataset, which contains data from six subjects. The model with the lowest RMSE at the 30- and 60-minute prediction horizons was selected as the best performing model. Our results show that the sequence-to-sequence BiLSTM performed better than the other models. This work demonstrates the potential of artificial neural network algorithms in the management of Type 1 diabetes.
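    A common way to frame CGM data for sequence-to-sequence training is to slice the series into history/horizon window pairs; the sketch below is illustrative, not the authors' preprocessing code (with 5-minute samples, horizon=6 corresponds to a 30-minute forecast):

```python
def make_seq2seq_pairs(cgm, history, horizon):
    """Build (input, target) window pairs from a CGM series sampled at
    regular intervals: the model reads `history` samples and predicts
    the next `horizon` samples."""
    pairs = []
    for i in range(len(cgm) - history - horizon + 1):
        pairs.append((cgm[i:i + history],
                      cgm[i + history:i + history + horizon]))
    return pairs

cgm = [100 + t for t in range(40)]          # toy glucose trace, mg/dL
pairs = make_seq2seq_pairs(cgm, history=24, horizon=6)
print(len(pairs), len(pairs[0][0]), len(pairs[0][1]))  # 11 24 6
```

    RMSE at each horizon is then computed between the predicted and observed target windows, which is how the 30- and 60-minute comparisons above are made.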

    MedShift: identifying shift data for medical dataset curation

    To curate a high-quality dataset, identifying data variance between internal and external sources is a fundamental and crucial step. However, methods to detect shift or variance in data have not been significantly researched. Challenges include the lack of effective approaches to learn dense representations of a dataset and the difficulty of sharing private data across medical institutions. To overcome these problems, we propose a unified pipeline called MedShift to detect the top shift samples and thus facilitate medical dataset curation. Given an internal dataset A as the base source, we first train anomaly detectors for each class of dataset A to learn internal distributions in an unsupervised way. Second, without exchanging data across sources, we run the trained anomaly detectors on an external dataset B for each class. The data samples with high anomaly scores are identified as shift data. To quantify the shiftness of the external dataset, we cluster B's data into groups class-wise based on the obtained scores. We then train a multi-class classifier on A and measure the shiftness via the classifier's performance variance on B, gradually dropping the group with the largest anomaly score for each class. Additionally, we adapt a dataset quality metric to help inspect the distribution differences across multiple medical sources. We verify the efficacy of MedShift with musculoskeletal radiographs (MURA) and chest x-ray datasets from more than one external source. Experiments show that our proposed shift data detection pipeline can help medical centers curate high-quality datasets more efficiently. An interface introduction video visualizing our results is available at https://youtu.be/V3BF0P1sxQE.
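    The shiftness measurement can be sketched as follows; `shiftness_curve`, the toy groups, and the accuracy callable are hypothetical stand-ins for the trained multi-class classifier and the clustered external data:

```python
def shiftness_curve(groups, classifier_acc):
    """Sketch of MedShift's shiftness measurement: external data is
    clustered into groups ordered by descending anomaly score; we score
    the classifier on the remaining data, drop the most anomalous group,
    and repeat, recording the accuracy after each drop.

    groups: external samples grouped by descending anomaly score
    classifier_acc: callable scoring a flat list of samples
    """
    curve = []
    remaining = list(groups)
    while remaining:
        flat = [s for g in remaining for s in g]
        curve.append(classifier_acc(flat))
        remaining.pop(0)  # drop the group with the largest anomaly score
    return curve

# toy samples marked as correctly (1) or incorrectly (0) classified
groups = [[0, 0, 1], [1, 0, 1], [1, 1, 1]]   # most-shifted group first
acc = lambda xs: sum(xs) / len(xs)
print([round(a, 2) for a in shiftness_curve(groups, acc)])  # [0.67, 0.83, 1.0]
```

    A steep rise in the curve as high-anomaly groups are dropped indicates the external source carries substantial shift relative to the internal distribution.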